Machine learning and deep learning models support medical imaging by analysing large volumes of clinical and imaging variables to provide personalised risk assessments and tailored management plans.
Clinical decision support systems (CDSS) empower doctors, nurses, patients, caregivers, pharmacists and others to make more informed decisions and deliver effective care.
NanoxAI uses AI medical imaging technology to screen large populations for early signs of chronic disease. This enables more accurate risk adjustment, so patients receive the preventative care paths they need, while improving the quality of health services at significantly lower cost.
This guidance will help you develop a business case and a high-value use case for medical imaging using artificial intelligence and machine learning.
This Medical Imaging use case framework guidance describes Esdha's current research on the topic and should be viewed only as recommendations, unless specific regulatory or statutory requirements are cited.
Operational Impact: Poor data quality can affect the quality of decisions.
System monitoring & maintenance: Healthcare institutions have reported difficulty in monitoring and maintaining the knowledge base, algorithms, rules and data.
Lack of validation: users can become over-reliant on medical imaging systems without questioning the accuracy of the recommendations provided.
Knowledge base: creating the knowledge base, with a clear evidence base for incorporating recommendations, is a challenge and requires specialist input from a range of care professionals.
Interdisciplinary team: We need an interdisciplinary team consisting of computer scientists and clinicians to align goals, requirements and clinical trial outcomes.
Accountability: There is a need for frameworks on medical malpractice liability for AI medical imaging.
Cost: owing to the lack of standardised metrics, cost-benefit assessment is difficult, as cost-effectiveness depends on a range of socio-economic factors, including environmental, political and technological ones.
Explainability: there is a risk of misunderstanding recommendations or wrongly assuming causality; because explanations are correlation-based, they can be susceptible to error due to random factors. Explainability is useful as long as the outputs are sufficiently accurate, validated and required by the user.
Wrong or misleading recommendations: these can result in loss of trust or serious consequences.
Privacy & quality: adherence to data protection and privacy requirements such as the General Data Protection Regulation (GDPR) will be essential. A standardised approach to data collection can help to address this risk.
Bias, overfitting and validity: build rigorous criteria to evaluate for biases (such as statistical misrepresentation of the general population), overfitting, and validity.
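The bias and overfitting checks above can be sketched in code. This is an illustrative example only, not part of the guidance: the features, labels and subgroup column are synthetic stand-ins, and a deliberately simple nearest-centroid classifier is used so the evaluation logic stays in focus.

```python
# Illustrative sketch (assumed example, not from the guidance): checking a
# simple imaging classifier for overfitting and subgroup bias.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))                    # stand-in imaging features
y = (X[:, 0] + rng.normal(scale=0.5, size=600) > 0).astype(int)
group = rng.integers(0, 2, size=600)             # hypothetical subgroup (e.g. sex)

# Hold out 30% of cases for validation.
split = int(0.7 * len(X))
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = y[:split], y[split:]
g_te = group[split:]

# Nearest-centroid classifier: predict the class whose mean feature vector
# is closest. Simplistic, but enough to illustrate the evaluation checks.
centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(X, y):
    return float((predict(X) == y).mean())

# Overfitting check: a large train/validation gap signals poor generalisation.
gap = accuracy(X_tr, y_tr) - accuracy(X_te, y_te)

# Bias check: compare validation accuracy across subgroups.
acc_by_group = {int(g): accuracy(X_te[g_te == g], y_te[g_te == g])
                for g in np.unique(g_te)}
print(f"train/validation accuracy gap: {gap:.3f}")
print("accuracy by subgroup:", acc_by_group)
```

In practice the same two checks would be run with the production model, real imaging-derived features, and clinically relevant subgroup labels; a large subgroup accuracy gap is the statistical misrepresentation the guidance warns about.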
Here are some of the questions to consider for business and use case development.
What do you think about 'medical imaging'?
What are your biggest challenges?
How do you think we can address the challenges?
Are there any barriers?
What are the regulations & legal requirements?
What are their expectations & intention of use?
Any conflict of interest?
What patient problem are you trying to address?
What is the current decision-making process? Who is involved in the treatment?
What difference would AI achieve?
Are there any adverse consequences?
What are the different systems used?
Do you have access to the required data sources?
Is there a standardised approach for data collection?
What is the quality of the data?
What is the current infrastructure?
What systems would you need access to?
Are there any restrictions?
What is the problem?
Is there a need to solve the problem?
What is the scope, boundaries & context?
Analysis of socio-technical scenarios
Would patient outcomes improve with AI?
Cost-benefit and risk-benefit analysis?
How would you safeguard privacy & comply with law?
Would misuse of data/ algorithm contribute to social/ ethical problems?
Map to trustworthy AI
Risks, ethical tensions & mitigations
Which patient groups could be denied opportunities or face negative consequences?
Do you have a multidisciplinary team?
Do you have access to AI experts for the project?
Do you have support from the executives, clinicians, patients, regulators & others?
Do you have a systems view of the architecture and data pipeline?
Do you have access to data?
How will the solution integrate with your existing systems?
What computing and data storage capacity do you need?
How will you monitor KPIs?
What is the infrastructure?
Any dependencies/ issues?
What would be the harm in providing the solution?
What is your data maintenance process?
What is your value proposition?
Is your AI strategy aligned with the business strategy?
What are the future prospects & commercial viability?
Do you have the required finance for the project?
Does the financial forecast cover ongoing maintenance?
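The KPI-monitoring question above can be sketched as a simple rolling check. This is a hypothetical illustration: the class name, the accuracy KPI, and the baseline/tolerance values are assumptions, not prescribed by this guidance.

```python
# Hypothetical KPI monitor for a deployed imaging model: track rolling
# accuracy over a window of recent cases and flag drift below a baseline.
from collections import deque

class KpiMonitor:
    """Track a rolling accuracy KPI and flag drops below a threshold."""

    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline          # accuracy agreed at validation time
        self.tolerance = tolerance        # acceptable drop before alerting
        self.results = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.results.append(1 if correct else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

    def drifted(self) -> bool:
        # Flag only once the window is full, to avoid noisy early alerts.
        return (len(self.results) == self.results.maxlen
                and self.rolling_accuracy() < self.baseline - self.tolerance)

# Example: baseline 90% accuracy, 10-case window, 8 of 10 recent cases correct.
monitor = KpiMonitor(baseline=0.90, window=10)
for outcome in [True] * 8 + [False] * 2:
    monitor.record(outcome)
print(monitor.rolling_accuracy(), monitor.drifted())  # 0.8 True
```

A real deployment would track several KPIs (accuracy, turnaround time, override rate) and route alerts into the maintenance process discussed above, but the windowed-baseline pattern is the same.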
https://www.nanox.vision/ai